Google Dethrones NVIDIA With Split Results In Latest Artificial Intelligence Benchmarking Tests
Digital transformation has created artificial intelligence workloads at an unprecedented scale. These workloads require corporations to collect and store mountains of data. Even as business intelligence is extracted from current machine learning models, new data inflows are used to create new models and update existing ones. Building AI models is complex and expensive, and it differs substantially from traditional software development.
NVIDIA Crushes Latest Artificial Intelligence Benchmarking Tests
In this third round of submissions, MLCommons released results for MLPerf Inference v1.0, a suite of standard AI inference benchmarks built on seven different applications. The seven tests cover workloads in computer vision, medical imaging, recommender systems, speech recognition, and natural language processing. MLPerf benchmarking measures how quickly a trained neural network can process data for each application and form factor, allowing unbiased comparison between systems.
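The core idea behind inference benchmarking can be sketched in a few lines: freeze an already-trained model, feed it data for a fixed wall-clock window, and report how many samples per second it processes. The sketch below is illustrative only, not MLPerf's actual LoadGen harness; the "model" is a stand-in NumPy matrix multiply rather than a real neural network.

```python
# Minimal sketch of throughput-style inference benchmarking.
# Assumption: the model is already trained; we only time the forward pass.
import time
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((512, 512))   # frozen "trained" parameters


def infer(batch):
    """Run one forward pass over a batch of input vectors."""
    return batch @ weights


def measure_throughput(batch_size=64, duration_s=1.0):
    """Process batches for a fixed wall-clock window; return samples/second."""
    batch = rng.standard_normal((batch_size, 512))
    processed = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration_s:
        infer(batch)
        processed += batch_size
    elapsed = time.perf_counter() - start
    return processed / elapsed


if __name__ == "__main__":
    print(f"~{measure_throughput():.0f} samples/sec")
```

Real MLPerf submissions go further, measuring latency percentiles and accuracy targets under several load scenarios (single-stream, server, offline), but the throughput measurement above captures the basic principle.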